
Keyword Search Result

[Keyword] neural networks(287hit)

101-120hit(287hit)

  • Hybrid Quaternionic Hopfield Neural Network

    Masaki KOBAYASHI  

     
    PAPER-Nonlinear Problems
    Vol: E98-A No:7  Page(s): 1512-1518

    In recent years, applications of complex-valued neural networks have become widespread. Quaternions are an extension of complex numbers, and neural networks with quaternions have been proposed. Because quaternion algebra is non-commutative, two orders of multiplication can be used to calculate the weighted input. However, both orders provide almost the same performance. We propose hybrid quaternionic Hopfield neural networks, which use both orders of multiplication. Using computer simulations, we show that these networks outperform conventional quaternionic Hopfield neural networks in noise tolerance. We discuss why hybrid quaternionic Hopfield neural networks improve noise tolerance from the standpoint of rotational invariance.
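
    Since quaternion multiplication is non-commutative, the weighted input can be formed either as weight-times-state or state-times-weight, and the hybrid network keeps both orders. The following minimal sketch (our own illustration, not the authors' code) uses the standard Hamilton product to show that the two orders generally differ:

    ```python
    import numpy as np

    def qmul(p, q):
        """Hamilton product of quaternions p = (a, b, c, d) and q = (e, f, g, h)."""
        a, b, c, d = p
        e, f, g, h = q
        return np.array([
            a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e,
        ])

    w = np.array([0.5, -1.0, 0.3, 0.2])   # a quaternionic connection weight
    x = np.array([1.0, 0.0, -2.0, 1.5])   # a quaternionic neuron state

    print(qmul(w, x))   # "weight * state" order
    print(qmul(x, w))   # "state * weight" order; differs in the imaginary parts
    # A hybrid unit can use one order for some connections and the other order
    # for the rest, which is the idea behind the hybrid network above.
    ```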

  • Illumination Modeling Method for Office Lighting Control by Using RBFNN

    Wa SI  Xun PAN  Harutoshi OGAI  Katsumi HIRAI  Noriyoshi YAMAUCHI  Tansheng LI  

     
    PAPER-Biocybernetics, Neurocomputing
    Vol: E97-D No:12  Page(s): 3192-3200

    This paper presents an illumination modeling method for lighting control that models the illumination distribution inside office buildings. The algorithm uses data from illumination sensors to train Radial Basis Function Neural Networks (RBFNN), which can then be used to calculate 1) the illuminance contribution from each luminaire to different positions in the office and 2) the natural illuminance distribution inside the office. The method provides detailed illumination contributions from both artificial and natural light sources for lighting control algorithms while using only a small number of sensors. Simulations with DIALux are performed to verify the feasibility and accuracy of the modeling method.
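
    As a concrete illustration of the kind of RBF regression described above: Gaussian basis functions centered in the room and linear output weights fitted by least squares from sensor readings. The positions, centers, and data below are hypothetical placeholders, not the paper's office or sensor layout.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical 2-D positions in a room and measured illuminance values (lux)
    positions = rng.uniform(0.0, 10.0, size=(200, 2))
    illuminance = (300 + 50 * np.sin(positions[:, 0]) + 30 * positions[:, 1]
                   + rng.normal(0, 5, 200))

    centers = rng.uniform(0.0, 10.0, size=(25, 2))    # RBF centers
    width = 2.0                                       # shared Gaussian width

    def design_matrix(x):
        d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * width ** 2))

    Phi = design_matrix(positions)
    w, *_ = np.linalg.lstsq(Phi, illuminance, rcond=None)   # output weights

    query = np.array([[5.0, 5.0]])                           # an unsensed position
    print("predicted illuminance:", design_matrix(query) @ w)
    ```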

  • Complex-Valued Bipartite Auto-Associative Memory

    Yozo SUZUKI  Masaki KOBAYASHI  

     
    PAPER-Nonlinear Problems
    Vol: E97-A No:8  Page(s): 1680-1687

    Complex-valued Hopfield associative memory (CHAM) is one of the most promising neural network models for dealing with multilevel information. CHAM has an inherent property of rotational invariance, which reduces the network's robustness to noise and is therefore a critical problem. Here, we propose complex-valued bipartite auto-associative memory (CBAAM) to resolve this reduction in noise robustness. CBAAM consists of two layers, a visible complex-valued layer and an invisible real-valued layer. The invisible real-valued layer prevents rotational invariance and the resulting reduction in noise robustness. In addition, CBAAM has high parallelism, unlike CHAM. By computer simulations, we show that CBAAM is superior to CHAM in noise robustness. The noise robustness of CHAM decreases as the resolution factor increases, whereas CBAAM provides high noise robustness independent of the resolution factor.

  • An Approach for Sound Source Localization by Complex-Valued Neural Network

    Hirofumi TSUZUKI  Mauricio KUGLER  Susumu KUROYANAGI  Akira IWATA  

     
    PAPER-Biocybernetics, Neurocomputing
    Vol: E96-D No:10  Page(s): 2257-2265

    This paper presents a Complex-Valued Neural Network-based sound localization method. The proposed approach uses two microphones to localize sound sources in the whole horizontal plane. The method uses time delay and amplitude difference to generate a set of features, which are then classified by a Complex-Valued Multi-Layer Perceptron. The advantage of using complex values is that the amplitude information can naturally mask the phase information. The proposed method is analyzed experimentally with regard to the spectral characteristics of the target sounds and its tolerance to noise. The obtained results emphasize and confirm the advantages of using Complex-Valued Neural Networks for the sound localization problem in comparison to the traditional Real-Valued Neural Network model.
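
    The "amplitude masks phase" property can be illustrated by encoding each band's interaural cues as one complex number whose magnitude is the amplitude cue and whose argument is the phase (time-delay) cue: when the magnitude is near zero, an unreliable phase contributes almost nothing to the complex-valued weighted sums. This is a hedged sketch of such an encoding, not the authors' exact feature pipeline.

    ```python
    import numpy as np

    def complex_feature(amplitude_cue, time_delay, freq_hz):
        """One band's feature: magnitude = amplitude cue, phase = delay cue."""
        phase = 2.0 * np.pi * freq_hz * time_delay
        return amplitude_cue * np.exp(1j * phase)

    # A band with a strong amplitude cue: its phase information matters
    strong = complex_feature(amplitude_cue=0.9, time_delay=2e-4, freq_hz=1000.0)
    # A band with almost no energy: its (unreliable) phase is naturally masked
    weak = complex_feature(amplitude_cue=0.01, time_delay=3e-4, freq_hz=1000.0)

    w = 0.7 - 0.2j                          # one complex weight of a CVNN unit
    print(abs(w * strong), abs(w * weak))   # the weak band barely affects the sum
    ```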

  • Learning of Simple Dynamic Binary Neural Networks

    Ryota KOUZUKI  Toshimichi SAITO  

     
    PAPER-Neural Networks and Bioengineering
    Vol: E96-A No:8  Page(s): 1775-1782

    This paper studies the simple dynamic binary neural network characterized by the signum activation function, ternary weighting parameters, and integer threshold parameters. The network can be regarded as a digital version of the recurrent neural network and can output a variety of binary periodic orbits. The network dynamics can be simplified into a return map from a set of lattice points to itself. In order to store a desired periodic orbit, we present two learning algorithms based on correlation learning and the genetic algorithm. The algorithms are applied to three examples, including a periodic orbit corresponding to the switching signal of a DC-AC inverter and artificial periodic orbits. Using the return map, we investigate the storage of the desired periodic orbits and the stability of the stored orbits.
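
    For concreteness, the network class described above can be written in a few lines: a binary state vector in {-1, +1}^N, ternary weights, integer thresholds, and a synchronous signum update that acts as the return map on lattice points. The weights and thresholds below are arbitrary placeholders, not a learned inverter switching signal.

    ```python
    import numpy as np

    def sgn(v):
        return np.where(v >= 0, 1, -1)     # signum activation (0 mapped to +1)

    # Hypothetical ternary weights and integer thresholds for a 4-neuron network
    W = np.array([[ 0,  1, -1,  0],
                  [ 1,  0,  0, -1],
                  [-1,  0,  0,  1],
                  [ 0, -1,  1,  0]])
    theta = np.array([0, 1, -1, 0])

    def step(x):
        return sgn(W @ x - theta)          # synchronous update: the "return map"

    x = np.array([1, -1, 1, -1])
    orbit = [x]
    for _ in range(8):                     # iterate to expose the periodic behavior
        x = step(x)
        orbit.append(x)
    print(np.array(orbit))
    ```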

  • Self-Organizing Incremental Associative Memory-Based Robot Navigation

    Sirinart TANGRUAMSUB  Aram KAWEWONG  Manabu TSUBOYAMA  Osamu HASEGAWA  

     
    PAPER-Information Network
    Vol: E95-D No:10  Page(s): 2415-2425

    This paper presents a new incremental approach for robot navigation using associative memory. We define an association as node→action→node, where a node is the robot position and an action is the action taken by the robot (i.e., orientation, direction). These associations are used for path planning by retrieving a sequence of path fragments (in the form (node→action→node) → (node→action→node) → …) from the start point to the goal. To learn such associations, we apply Self-Organizing Incremental Associative Memory (SOIAM). Our proposed method comprises three layers: an input layer, a memory layer, and an associative layer. The input layer collects input observations. The memory layer incrementally clusters the obtained observations into a set of topological nodes. In the associative layer, the associative memory is used as the topological map in which nodes are associated with actions. The advantages of our method are that 1) it does not need prior knowledge, 2) it can process data in continuous space, which is very important for real-world robot navigation, and 3) it can learn in an incremental, unsupervised manner. Experiments are performed with a realistic robot simulator, Webots, and are divided into four parts to show the ability of map creation, incremental learning, and symbol-based recognition. Results show that our method offers a 90% success rate for reaching the goal.
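
    Once the node→action→node associations are learned, path planning reduces to retrieving a chain of fragments from the start node to the goal. The minimal sketch below shows only this retrieval step over a hypothetical learned map; the SOIAM learning and clustering layers are not modeled here.

    ```python
    from collections import deque

    # Hypothetical learned associations: node -> [(action, next_node), ...]
    associations = {
        "A": [("turn_left", "B"), ("forward", "C")],
        "B": [("forward", "D")],
        "C": [("turn_right", "D")],
        "D": [("forward", "goal")],
    }

    def plan(start, goal):
        """Breadth-first retrieval of a (node -> action -> node) chain."""
        queue = deque([(start, [])])
        visited = {start}
        while queue:
            node, path = queue.popleft()
            if node == goal:
                return path
            for action, nxt in associations.get(node, []):
                if nxt not in visited:
                    visited.add(nxt)
                    queue.append((nxt, path + [(node, action, nxt)]))
        return None

    print(plan("A", "goal"))   # e.g. [('A', 'turn_left', 'B'), ('B', 'forward', 'D'), ...]
    ```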

  • Dynamical Associative Memory: The Properties of the New Weighted Chaotic Adachi Neural Network

    Guangchun LUO  Jinsheng REN  Ke QIN  

     
    LETTER-Biocybernetics, Neurocomputing
    Vol: E95-D No:8  Page(s): 2158-2162

    A new training algorithm for the chaotic Adachi Neural Network (AdNN) is investigated. The classical training algorithm for the AdNN and its variants is usually "one-shot" learning; the Outer Product Rule (OPR), for example, is the most commonly used. Although the OPR is effective for conventional neural networks, its effectiveness and adequacy for Chaotic Neural Networks (CNNs) have not been discussed formally. As complementary and tentative work in this field, we modify the AdNN's weights by enforcing an unsupervised Hebbian rule. Experimental analysis shows that the new weighted AdNN yields even stronger dynamical associative memory and pattern recognition phenomena than the primitive AdNN across different settings.
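
    The contrast between one-shot outer-product storage and an iterative Hebbian update can be sketched on bipolar patterns as follows. This is a generic Hebbian rule on a plain Hopfield-style weight matrix, intended only to illustrate the idea; it is not the AdNN's chaotic dynamics or the paper's exact update.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N = 100
    patterns = rng.choice([-1, 1], size=(4, N))     # hypothetical stored patterns

    # Classical one-shot Outer Product Rule (OPR)
    W_opr = sum(np.outer(p, p) for p in patterns) / N
    np.fill_diagonal(W_opr, 0)

    # Iterative, unsupervised Hebbian learning instead of one-shot storage
    W_hebb = np.zeros((N, N))
    eta = 0.01
    for _ in range(400):
        p = patterns[rng.integers(len(patterns))]   # present one pattern
        W_hebb += eta * np.outer(p, p)              # Hebbian co-activation update
    np.fill_diagonal(W_hebb, 0)

    # Recall check from a noisy cue (about 10% of the bits flipped)
    noisy = patterns[0] * np.where(rng.random(N) < 0.1, -1, 1)
    for name, W in [("OPR", W_opr), ("Hebbian", W_hebb)]:
        recalled = np.sign(W @ noisy)
        print(name, "overlap with stored pattern:", (recalled * patterns[0]).mean())
    ```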

  • A Simple Class of Binary Neural Networks and Logical Synthesis

    Yuta NAKAYAMA  Ryo ITO  Toshimichi SAITO  

     
    LETTER-Nonlinear Problems
    Vol: E94-A No:9  Page(s): 1856-1859

    This letter studies learning of the binary neural network and its relation to logical synthesis. The network has the signum activation function and can approximate a desired Boolean function if its parameters are selected suitably. In a certain parameter subspace, the network is equivalent to the disjoint canonical form of Boolean functions. Outside of this subspace, the network can have a simpler structure than the canonical form, where simplicity is measured by the number of hidden neurons. In order to realize effective parameter setting, we present a learning algorithm based on the genetic algorithm. The algorithm uses the teacher signals as the initial kernel and tolerates a level of learning error. Basic numerical experiments confirm the efficiency of the algorithm.
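
    The correspondence with the disjoint canonical form can be made explicit: in the relevant parameter subspace each hidden signum neuron detects one minterm and the output neuron ORs the detections. The sketch below builds this canonical-form network for XOR over ±1-coded inputs; it is a textbook-style construction used only to illustrate the correspondence, not the letter's GA-based learning.

    ```python
    import numpy as np

    def sgn(v):
        return np.where(v >= 0, 1, -1)

    # XOR in ±1 coding is true exactly for the minterms (+1, -1) and (-1, +1)
    minterms = np.array([[ 1, -1],
                         [-1,  1]])
    n = minterms.shape[1]

    def canonical_network(x):
        # Hidden layer: neuron j outputs +1 only when x matches minterm j exactly
        h = sgn(minterms @ x - (n - 1))
        # Output layer: logical OR of the hidden detections
        return sgn(h.sum() + (len(minterms) - 1))

    for x in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
        print(x, "->", canonical_network(np.array(x)))
    ```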

  • Noise Robust Gradient Descent Learning for Complex-Valued Associative Memory

    Masaki KOBAYASHI  Hirofumi YAMADA  Michimasa KITAHARA  

     
    LETTER-Nonlinear Problems
    Vol: E94-A No:8  Page(s): 1756-1759

    Complex-valued Associative Memory (CAM) is an advanced model of Hopfield Associative Memory. The CAM is based on multi-state neurons and has a high representational ability. Lee proposed gradient descent learning for the CAM to improve its storage capacity, based only on the phases of the input signals. In this paper, we propose another type of gradient descent learning based on both the phases and the amplitudes. The proposed learning method improves noise robustness and accelerates learning.
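
    The difference between phase-only learning and learning from both phase and amplitude can be seen in a single-neuron sketch: the target is a unit-magnitude multi-state value and the update below penalizes the full complex difference between the weighted sum and the target, i.e. both its phase and its amplitude. This is a generic Wirtinger-style gradient step for illustration, not Lee's rule or the authors' exact formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    K = 8                                   # resolution factor (number of states)
    N = 16                                  # number of input neurons

    # One multi-state input pattern and a unit-magnitude target state
    x = np.exp(2j * np.pi * rng.integers(0, K, N) / K)
    target = np.exp(2j * np.pi * rng.integers(0, K) / K)

    w = 0.1 * (rng.normal(size=N) + 1j * rng.normal(size=N))
    eta = 0.05
    for _ in range(200):
        u = np.vdot(w, x) / N               # weighted sum (vdot conjugates w)
        err = target - u                    # complex error: phase AND amplitude
        w += eta * np.conj(err) * x         # gradient step on |target - u|^2
    print("final |target - output|:", abs(target - np.vdot(w, x) / N))
    ```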

  • Reasoning on the Self-Organizing Incremental Associative Memory for Online Robot Path Planning

    Aram KAWEWONG  Yutaro HONDA  Manabu TSUBOYAMA  Osamu HASEGAWA  

     
    PAPER-Artificial Intelligence and Cognitive Science
    Vol: E93-D No:3  Page(s): 569-582

    Robot path planning is one of the important issues in robotic navigation. This paper presents a novel robot path-planning approach based on associative memory using Self-Organizing Incremental Neural Networks (SOINN). In the proposed method, an environment is first autonomously divided into a set of path fragments by junctions. Each fragment is represented by a sequence of preliminarily generated common patterns (CPs). In an online manner, the robot regards the current path as associated path fragments, each connected by junctions. A reasoning technique is additionally proposed for decision making at each junction to speed up exploration. Distinct from other methods, our method does not ignore the important information about the regions between junctions (path fragments). The resultant number of path fragments is also smaller than in other methods. Evaluation is performed via Webots physical 3D-simulation and real robot experiments, where only distance sensors are available. Results show that our method can represent the environment effectively; it enables the robot to solve the goal-oriented navigation problem in only one episode, which is fewer than necessary for most Reinforcement Learning (RL) based methods. The running time is proved to be finite and scales well with the environment. The resultant number of path fragments matches the environment well.

  • Stochastic Resonance in an Array of Locally-Coupled McCulloch-Pitts Neurons with Population Heterogeneity

    Akira UTAGAWA  Tohru SAHASHI  Tetsuya ASAI  Yoshihito AMEMIYA  

     
    PAPER-Nonlinear Problems
    Vol: E92-A No:10  Page(s): 2508-2513

    We found a new class of stochastic resonance (SR) in a simple neural network that consists of i) photoreceptors generating nonuniform outputs for common inputs with random offsets, ii) an ensemble of noisy McCulloch-Pitts (MP) neurons, each of which has random threshold values in the temporal domain, iii) local coupling connections between the photoreceptors and the MP neurons with variable receptive fields (RFs), iv) output cells, and v) local coupling connections between the MP neurons and the output cells. We calculated correlation values between the inputs and the outputs as a function of the RF size and the intensities of the random components in the photoreceptors and the MP neurons. We show the existence of "optimal noise intensities" of the MP neurons under nonidentical photoreceptors and of "nonzero optimal RF sizes," which indicates that the optimal correlation values of this SR model are determined by two critical parameters: the noise intensity (well known) and the RF size (a new parameter).
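
    The noise-intensity axis of this result is the classic stochastic resonance curve: a weak subthreshold input, an ensemble of noisy threshold (McCulloch-Pitts) neurons, and an input-output correlation that peaks at a nonzero noise level. The toy sketch below reproduces only that curve with made-up parameters; it omits the photoreceptor layer and the receptive-field structure studied in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    t = np.linspace(0.0, 10.0, 2000)
    signal = 0.5 * np.sin(2 * np.pi * t)      # weak, subthreshold input
    threshold = 1.0
    n_neurons = 50

    def input_output_correlation(noise_sigma):
        noise = rng.normal(0.0, noise_sigma, (n_neurons, t.size))
        spikes = (signal + noise > threshold).astype(float)   # MP neuron outputs
        pooled = spikes.mean(axis=0)                          # ensemble average
        return np.corrcoef(signal, pooled)[0, 1]

    for sigma in [0.2, 0.5, 1.0, 2.0, 4.0]:
        print(f"noise sigma = {sigma:4.1f}   correlation = "
              f"{input_output_correlation(sigma):.3f}")
    ```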

  • An Immunity-Based RBF Network and Its Application in Equalization of Nonlinear Time-Varying Channels

    Xiaogang ZANG  Xinbao GONG  Ronghong JIN  Xiaofeng LING  Bin TANG  

     
    LETTER-Neural Networks and Bioengineering
    Vol: E92-A No:5  Page(s): 1390-1394

    This paper proposes a novel RBF training algorithm based on immune operations for dynamic problem solving. The algorithm takes inspiration from the dynamic nature of the natural immune system and the locally tuned structure of the RBF neural network. Through the immune operations of vaccination and immune response, the RBF network can dynamically adapt to its environment according to changes in the training set. Simulation results demonstrate that an RBF equalizer based on the proposed algorithm achieves good performance in nonlinear time-varying channels.

  • Maximum-Flow Neural Network: A Novel Neural Network for the Maximum Flow Problem

    Masatoshi SATO  Hisashi AOMORI  Mamoru TANAKA  

     
    PAPER
    Vol: E92-A No:4  Page(s): 945-951

    With the advance of the Internet-based network communication society, how to send data quickly with little loss has become an important transportation problem. A generalized maximum flow algorithm gives the best solution to the transportation problem of deciding which route is appropriate for exchanging data. Therefore, the importance of the maximum flow algorithm keeps growing. In this paper, we propose a Maximum-Flow Neural Network (MF-NN) in which the branch nonlinearity has a saturation characteristic and by which the maximum flow problem can be solved with analog high-speed parallel processing. That is, the proposed neural network for the maximum flow problem can be realized by a nonlinear resistive circuit in which each connection weight between nodal neurons has a sigmoidal or piecewise-linear function. The parallel hardware of the MF-NN can be easily implemented.

  • Data Gathering Scheme Using Chaotic Pulse-Coupled Neural Networks for Wireless Sensor Networks

    Hidehiro NAKANO  Akihide UTANI  Arata MIYAUCHI  Hisao YAMAMOTO  

     
    PAPER-Nonlinear Problems
    Vol: E92-A No:2  Page(s): 459-466

    Wireless sensor networks (WSNs) have attracted a significant amount of interest from many researchers because they have great potential as a means of obtaining information about various environments remotely. WSNs have a wide range of applications, such as natural environmental monitoring in forest regions and environmental control in office buildings. In WSNs, hundreds or thousands of micro-sensor nodes with resource limitations such as battery capacity, memory, CPU, and communication capacity are deployed without control in a region and used to monitor and gather sensor information about environments. Therefore, a scalable and efficient network control and/or data gathering scheme that saves the energy consumption of each sensor node is needed to prolong WSN lifetime. In this paper, assuming that sensor nodes synchronize to intermittently communicate with each other only when they are active, in order to realize the long-term operation of WSNs, we propose a new synchronization scheme for gathering sensor information using chaotic pulse-coupled neural networks (CPCNN). We evaluate the proposed scheme using computer simulations and discuss its development potential. In the simulation experiments, the proposed scheme is compared with a previous synchronization scheme based on a pulse-coupled oscillator model to verify its effectiveness.
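
    For background, the pulse-coupled oscillator model used as the baseline for comparison can be simulated in a few lines: each node's phase ramps up, firing nodes reset and nudge the others forward, and firing times tend to group together so that nodes only need to be awake around the shared firing instants. The parameters below are arbitrary, and this sketch is the plain oscillator model, not the proposed chaotic CPCNN scheme.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 10
    phase = rng.random(n)            # phases in [0, 1); a node fires at phase 1
    eps = 0.1                        # phase advance caused by each received pulse
    dt = 0.001

    groups = []
    for step in range(30000):
        phase += dt                                  # free-running phase advance
        fired = np.zeros(n, dtype=bool)
        new = phase >= 1.0
        while new.any():                             # pulses may trigger more firings
            fired |= new
            phase[~fired] += eps * new.sum()         # non-fired nodes are nudged
            new = (phase >= 1.0) & ~fired
        phase[fired] = 0.0                           # nodes that fired reset together
        if step % 3000 == 0:
            groups.append(len(np.unique(phase)))     # merged nodes share a phase
    print("distinct firing groups over time:", groups)
    ```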

  • Durability of Affordable Neural Networks against Damaging Neurons

    Yoko UWATE  Yoshifumi NISHIO  Ruedi STOOP  

     
    PAPER-Neural Networks and Bioengineering
    Vol: E92-A No:2  Page(s): 585-593

    Durability describes the ability of a device to operate properly under imperfect conditions. We have recently proposed a novel neural network structure called an "Affordable Neural Network" (AfNN), in which affordable neurons of the hidden layer are considered the elements responsible for the robustness property observed in human brain function. Whereas we have earlier shown that AfNNs can still generalize and learn, here we show that these networks are robust against damage occurring after the learning process has terminated. The results support the view that AfNNs embody the important feature of durability. In our contribution, we investigate the durability of the AfNN when some of the neurons in the hidden layer are damaged after the learning process.

  • Fast Simulation Technique of Plane Circuits via Two-Layer CNN-Based Modeling

    Yuichi TANJI  Hideki ASAI  Masayoshi ODA  Yoshifumi NISHIO  Akio USHIDA  

     
    PAPER-Nonlinear Problems
    Vol: E91-A No:12  Page(s): 3757-3762

    A fast time-domain simulation technique for plane circuits via two-layer Cellular Neural Network (CNN)-based modeling, which is necessary for power/signal integrity evaluation in VLSIs, printed circuit boards, and packages, is presented. Using the new notation expressed by the two-layer CNN, simulation 1,553 times faster than Berkeley SPICE (ngspice) is achieved. In the CNN community, CNNs are generally simulated by explicit numerical integration such as the forward Euler and Runge-Kutta methods. However, since the two-layer CNN is a stiff circuit, it cannot be analyzed with an explicit numerical integration method. Hence, to analyze the two-layer CNN and reduce the computational cost, the leapfrog method is introduced. This procedure would open up applications of CNNs in the electronic design automation area.

  • Transient Stability Enhancement of Power Systems by Lyapunov-Based Recurrent Neural Networks UPFC Controllers

    Chia-Chi CHU  Hung-Chi TSAI  Wei-Neng CHANG  

     
    PAPER-Control and Optimization
    Vol: E91-A No:9  Page(s): 2497-2506

    A Lyapunov-based recurrent neural network unified power flow controller (UPFC) is developed for improving the transient stability of power systems. First, a simple UPFC dynamical model, composed of a controllable shunt susceptance on the shunt side and an ideal complex transformer on the series side, is utilized to analyze the UPFC's dynamical characteristics. Secondly, we study the control configuration of the UPFC with two major blocks: the primary control and the supplementary control. The primary control is implemented by standard PI techniques when the power system is operated in a normal condition. The supplementary control becomes effective only when the power system is subjected to large disturbances. We propose a new Lyapunov-based UPFC controller of the classical single-machine infinite-bus system for damping enhancement. In order to consider more detailed generator models, we also propose a Lyapunov-based adaptive recurrent neural network controller to deal with such model uncertainties. This controller can be treated as a neural network approximation of the Lyapunov control actions. In addition, this controller provides an online learning ability to adjust the corresponding weights, with the backpropagation algorithm built into the hidden layer. The proposed control scheme has been tested on two simple power systems. Simulation results demonstrate that the proposed control strategy is very effective for suppressing power swings even under severe system conditions.

  • Small Number of Hidden Units for ELM with Two-Stage Linear Model

    Hieu Trung HUYNH  Yonggwan WON  

     
    PAPER-Data Mining
    Vol: E91-D No:4  Page(s): 1042-1049

    Single-hidden-layer feedforward neural networks (SLFNs) are frequently used in machine learning due to their ability to form decision boundaries with arbitrary shapes if the activation function of the hidden units is chosen properly. Most learning algorithms for these networks are based on gradient descent and are still slow because of the many learning steps required. Recently, a learning algorithm called the extreme learning machine (ELM) has been proposed for training SLFNs to overcome this problem. It randomly chooses the input weights and hidden-layer biases, and analytically determines the output weights by a matrix inverse operation. This algorithm can achieve good generalization performance with high learning speed in many applications. However, it often requires a large number of hidden units and takes a long time to classify new observations. In this paper, a new approach for training SLFNs called the least-squares extreme learning machine (LS-ELM) is proposed. Unlike gradient descent-based algorithms and the ELM, our approach analytically determines the input weights, hidden-layer biases, and output weights based on linear models. For training with a large number of input patterns, an online training scheme with sub-blocks of the training set is also introduced. Experimental results for real applications show that our proposed algorithm offers high classification accuracy with a smaller number of hidden units and extremely high speed in both learning and testing.
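
    For reference, the basic ELM training step that LS-ELM builds on fits in a few lines: random input weights and biases are fixed, and the output weights come from a single least-squares (pseudo-inverse) solve. The sketch below is a generic ELM on synthetic data, not the proposed LS-ELM or the paper's experiments.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Synthetic binary classification data (placeholder for a real dataset)
    X = rng.normal(size=(500, 4))
    y = (X[:, 0] * X[:, 1] + X[:, 2] > 0).astype(float)

    n_hidden = 50
    W_in = rng.normal(size=(4, n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)           # random hidden-layer biases

    def hidden(X):
        return np.tanh(X @ W_in + b)        # hidden-layer activations

    H = hidden(X)
    beta = np.linalg.pinv(H) @ y            # output weights: one pseudo-inverse solve

    pred = (hidden(X) @ beta > 0.5).astype(float)
    print("training accuracy:", (pred == y).mean())
    ```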

  • Fuzzy Rule Extraction from Dynamic Data for Voltage Risk Identification

    Chen-Sung CHANG  

     
    PAPER-Artificial Intelligence and Cognitive Science
    Vol: E91-D No:2  Page(s): 277-285

    This paper presents a methodology for performing on-line voltage risk identification (VRI) in power supply networks using hyperrectangular composite neural networks (HRCNNs) and synchronized phasor measurements. The FHRCNN presented in this study integrates the paradigm of neural networks with the concept of knowledge-based approaches, rendering both more useful than when applied alone. The fuzzy rules extracted from dynamic data relating to the power system formalize the knowledge applied by experts when conducting the voltage risk assessment procedure. The efficiency of the proposed technique is demonstrated via its application to the Taiwan Power Provider System (Tai-Power System) under various operating conditions. Overall, the results indicate that the proposed scheme achieves a success rate of at least 97% in determining the current voltage security level.

  • Autonomous and Decentralized Optimization of Large-Scale Heterogeneous Wireless Networks by Neural Network Dynamics

    Mikio HASEGAWA  Ha Nguyen TRAN  Goh MIYAMOTO  Yoshitoshi MURATA  Hiroshi HARADA  Shuzo KATO  

     
    PAPER-Distributed Optimization
    Vol: E91-B No:1  Page(s): 110-118

    We propose a neurodynamical approach to a large-scale optimization problem in Cognitive Wireless Clouds, in which a huge number of mobile terminals with multiple different air interfaces autonomously utilize the most appropriate infrastructure wireless networks by sensing available wireless networks, selecting the most appropriate one, and reconfiguring themselves with seamless handover to the target networks. To deal with such a cognitive radio network, game theory has been applied to analyze the stability of the dynamical system formed by the mobile terminals' distributed behaviors, but it is not a tool for globally optimizing the state of the network. As a natural optimization dynamical system model suitable for large-scale complex systems, we introduce neural network dynamics, which converge to an optimal state because their defining property is to continually decrease the network's energy function. In this paper, we apply such neurodynamics to the optimization problem of radio access technology selection. We compose a neural network that solves the problem, and we show that it is possible to improve the total average throughput simply by using distributed and autonomous neuron updates on the terminal side.
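
    The core mechanism, distributed state updates that can only decrease a global energy function, can be illustrated with a toy radio access selection problem: each terminal picks one network, the energy penalizes congestion, and each terminal's asynchronous update keeps whichever choice lowers the energy. The energy function, traffic model, and update rule below are a simplified illustration of energy-descent dynamics, not the paper's formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n_terminals, n_networks = 60, 4
    demand = rng.uniform(0.5, 2.0, n_terminals)        # traffic demand per terminal
    choice = rng.integers(0, n_networks, n_terminals)  # initial network selection

    def energy(choice):
        loads = np.bincount(choice, weights=demand, minlength=n_networks)
        return np.sum(loads ** 2)       # congestion penalty; minimal when balanced

    for sweep in range(6):
        for i in rng.permutation(n_terminals):         # asynchronous terminal updates
            best, best_e = choice[i], energy(choice)
            for k in range(n_networks):                # try each candidate network
                choice[i] = k
                e = energy(choice)
                if e < best_e:
                    best, best_e = k, e
            choice[i] = best                           # keep the energy-lowering choice
        print(f"sweep {sweep}: energy = {energy(choice):.1f}")
    ```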
